Adversarial attack

Boosting Adversarial Transferability by Achieving Flat Local Maxima

Neural Information Processing Systems

Specifically, we randomly sample an example and adopt a first-order procedure to approximate the Hessian/vector product, which makes the computation more efficient by interpolating two neighboring gradients.
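The idea of approximating a Hessian/vector product from two neighboring gradients can be sketched with a central finite difference. This is a minimal illustration, not the paper's implementation: the toy quadratic loss, and the names `grad_f` and `approx_hvp`, are assumptions chosen so the approximation can be checked against the exact product.

```python
import numpy as np

# Toy loss f(x) = 0.5 * x^T A x, whose gradient is A @ x and whose
# Hessian is the fixed symmetric matrix A. (Illustrative choice only.)
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

def grad_f(x):
    # Analytic gradient of the toy quadratic loss.
    return A @ x

def approx_hvp(grad_fn, x, v, eps=1e-4):
    """First-order approximation of the Hessian-vector product H @ v.

    Rather than forming the Hessian, interpolate two neighboring
    gradients around x along direction v:
        H @ v ~= (grad(x + eps*v) - grad(x - eps*v)) / (2 * eps)
    Only two extra gradient evaluations are needed.
    """
    return (grad_fn(x + eps * v) - grad_fn(x - eps * v)) / (2.0 * eps)

x = np.array([1.0, -2.0])
v = np.array([0.5, 1.0])
hvp = approx_hvp(grad_f, x, v)
# For a quadratic loss the exact product is A @ v, so hvp should match it
# up to floating-point error.
```

For a quadratic loss the central difference is exact, which makes the sketch easy to verify; for a neural-network loss the same two-gradient interpolation gives an O(eps^2) approximation without ever materializing the Hessian.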
Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models

Ziyi Yin, Muchao Ye

Neural Information Processing Systems

Vision-Language (VL) pre-trained models have shown superior performance on many multimodal tasks. However, the adversarial robustness of such models has not been fully explored. Existing approaches mainly focus on adversarial robustness under the white-box setting, which is unrealistic. In this paper, we investigate a new yet practical task: crafting image and text perturbations using pre-trained VL models to attack black-box fine-tuned models on different downstream tasks.